---
title: Thesis Experiments
author: Diego Cruz Aguilar
date: 8/11/2024
date-modified: now
date-format: long
filters:
  - pseudocode
pseudocode:
  caption-prefix: "Algorithm"
  reference-prefix: "Algorithm"
  caption-number: true
toc: true
number-sections: true
execute:
  echo: true
  cache: true
  warning: false
format:
  html:
    code-fold: true
    code-tools: true
    code-copy: false
    include-in-header:
      text: |
        <script>
          MathJax = {
            loader: {
              load: ['[tex]/boldsymbol']
            },
            tex: {
              tags: "all",
              inlineMath: [['$','$'], ['\\(','\\)']],
              displayMath: [['$$','$$'], ['\\[','\\]']],
              processEscapes: true,
              processEnvironments: true,
              packages: {
                '[+]': ['boldsymbol']
              }
            }
          };
        </script>
        <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml-full.js" type="text/javascript"></script>
jupyter: python3
---
{{< include mimic_one_to_many.qmd >}}
# Results
## Comparison of results (different ways of combining and extracting signal features)
```{python}
import pandas as pd
import plotly.graph_objects as go
from itables import show

# Rank every model / feature-extraction combination by cross-validation accuracy
df_ranking = pd.read_csv("./ranking_short.csv")
df_best = df_ranking.sort_values(by='Cross-Validation Accuracy', ascending=False)
# Best-scoring combination for each feature-extraction method
best_combination_per_feature = df_best.loc[df_best.groupby('Feature Extraction')['Cross-Validation Accuracy'].idxmax()]
show(df_best)
```
```{python}
# Bar chart: best cross-validation accuracy per feature-extraction method
fig = go.Figure()
fig.add_trace(
    go.Bar(
        x=best_combination_per_feature['Feature Extraction'],
        y=best_combination_per_feature['Cross-Validation Accuracy'],
        marker=dict(
            colorscale='Viridis',
            color=best_combination_per_feature['Cross-Validation Accuracy'],
            colorbar=dict(title="Cross-Validation Accuracy"),
            showscale=True
        ),
        text=best_combination_per_feature['Model'],
        name="Feature Extraction",
        hoverinfo='x+text+y',
    )
)
fig.update_layout(
    xaxis_title="Feature Extraction",
    yaxis_title="Cross-Validation Accuracy"
)
fig.show()
```
## Result metrics
```{python}
# `results` is produced by the included mimic_one_to_many.qmd
df_results = pd.DataFrame(results)
df_results.to_csv("One-to-Many-Results.csv", index=False)
df_ordenado = df_results.sort_values(by='accuracy', ascending=False)
show(df_ordenado)
```
## Classifier accuracy
```{python}
fig = go.Figure()
# One accuracy trace per classifier
unique_models = df_results['model'].unique()
for name in unique_models:
    model_data = df_results[df_results['model'] == name]
    fig.add_trace(
        go.Scatter(
            # Number trials 1..n within each model, rather than reusing the
            # global DataFrame index, which interleaves across models
            x=list(range(1, len(model_data) + 1)),
            y=model_data['accuracy'],
            mode="lines+markers",
            name=name,
        )
    )
fig.update_layout(
    title="Accuracy Comparison Across Classifiers",
    xaxis_title="Trial Number",
    yaxis_title="Accuracy",
    template="plotly_white",
)
fig.show()
```
## Comparison with the state of the art
```{python}
# Best of our models vs. accuracies reported in the literature ([1]-[5])
mejor_modelo = df_results.loc[df_results['accuracy'].idxmax()]
models_accuracies = {
    "[1]": .98,
    "[2] m": .96,
    "[2] s": .9073,
    "[3]": .935,
    "[4]": .921,
    "[5]": .949,
    f"Ours ({mejor_modelo['model']})": mejor_modelo['accuracy'],
}
models = list(models_accuracies.keys())
accuracies = list(models_accuracies.values())
fig = go.Figure(data=[
    go.Scatter(
        x=models,
        y=accuracies,
        mode='markers+lines',
        marker=dict(size=10, color=accuracies, colorscale='Viridis', showscale=True),
        line=dict(dash='solid'),
        name='Accuracy'
    )
])
fig.update_layout(
    title="Model Accuracies",
    xaxis_title="Models",
    yaxis_title="Accuracy",
    yaxis=dict(range=[0.9, 1.0]),
    template="plotly_white"
)
fig.show()
```